This paper addresses the problem of gaze target detection from a single image captured from a third-person point of view. We propose a multimodal deep architecture to infer where a person in a scene is looking. The spatial model is trained on the head image of the person of interest, the scene image, and a depth map, which together represent rich contextual information. Unlike several prior works, our model requires no supervision of the gaze angle and does not rely on head orientation information and/or the eye locations of the person of interest. Extensive experiments demonstrate the stronger performance of our method on multiple benchmark datasets. We also study several variations of our method obtained by altering the joint learning of the multimodal data; some of these variations also outperform a few prior works. For the first time, this paper examines gaze target detection across domains and empowers the multimodal network to effectively handle the domain gap between datasets. The code of the proposed method is available at https://github.com/francescotonini/multimodal-across-domains-gaze-target-detection.
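To make the multimodal setup concrete, below is a minimal PyTorch sketch of a detector that fuses a head crop, the scene image, and a depth map into a gaze heatmap. All module names, channel sizes, and the concatenation-based fusion are illustrative assumptions and do not reproduce the authors' exact architecture.

```python
# Minimal illustrative sketch: fuse head crop, scene image, and depth map
# into a gaze heatmap. Channel sizes and fusion strategy are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions followed by 2x spatial downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class MultimodalGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate encoders for each modality.
        self.head_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.scene_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.depth_enc = nn.Sequential(conv_block(1, 16), conv_block(16, 64))
        # Fuse concatenated feature maps and decode to a single-channel heatmap.
        self.decoder = nn.Sequential(
            nn.Conv2d(64 * 3, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 1, kernel_size=1),
        )

    def forward(self, head, scene, depth):
        feats = torch.cat(
            [self.head_enc(head), self.scene_enc(scene), self.depth_enc(depth)],
            dim=1,
        )
        return self.decoder(feats)  # (B, 1, H, W) gaze heatmap logits


if __name__ == "__main__":
    model = MultimodalGazeNet()
    head = torch.randn(2, 3, 64, 64)   # crop of the person of interest's head
    scene = torch.randn(2, 3, 64, 64)  # full scene image
    depth = torch.randn(2, 1, 64, 64)  # estimated depth map
    print(model(head, scene, depth).shape)  # torch.Size([2, 1, 64, 64])
```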
Emotion recognition is involved in several real-world applications. With the increase in available modalities, the automatic understanding of emotions is becoming more accurate. The success of multimodal emotion recognition (MER) relies primarily on the supervised learning paradigm. However, data annotation is expensive and time-consuming, and because emotion expression and perception depend on several factors (e.g., age, gender, culture), obtaining labels with high reliability is hard. Motivated by this, we focus on unsupervised feature learning for MER. We consider discrete emotions and use text, audio, and vision as modalities. Our method, based on contrastive losses between pairwise modalities, is the first such attempt in the MER literature. Compared to existing MER methods, our end-to-end feature learning approach has several differences (and advantages): i) it is unsupervised, so the learning is free of the data labeling cost; ii) it does not require data spatial augmentation, modality alignment, a large batch size, or many epochs; iii) it applies data fusion only at inference time; and iv) it does not require backbones pre-trained on emotion recognition tasks. Experiments on benchmark datasets show that our method outperforms several baselines and unsupervised learning methods applied in MER. In particular, it even surpasses some supervised MER state-of-the-art approaches.
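As an illustration of a pairwise cross-modal contrastive objective of this kind, the following PyTorch sketch implements an InfoNCE-style loss between two modality embeddings. The symmetric formulation and the temperature value are assumptions for illustration, not the paper's exact loss.

```python
# Sketch of a pairwise cross-modal contrastive (InfoNCE-style) loss: embeddings
# of the same sample in two modalities are pulled together, mismatched pairs
# are pushed apart. Temperature and symmetric averaging are assumptions.
import torch
import torch.nn.functional as F


def pairwise_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of the same samples in two modalities."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # scaled cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Matching pairs lie on the diagonal; treat each direction as a
    # classification problem and average the two losses.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    text_emb = torch.randn(8, 256)   # e.g., output of a text encoder
    audio_emb = torch.randn(8, 256)  # e.g., output of an audio encoder
    print(pairwise_contrastive_loss(text_emb, audio_emb).item())
```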
This work presents a systematic review of recent efforts (since 2010) aimed at automatically analyzing the nonverbal cues displayed in face-to-face, co-located human social interactions. The main reason for focusing on nonverbal cues is that they are the physical, detectable traces of social and psychological phenomena; therefore, detecting and understanding nonverbal cues means, at least to some extent, detecting and understanding social and psychological phenomena. The covered topics are organized into three groups: a) modeling social traits such as leadership, dominance, and personality traits; b) social role recognition and social relation detection; and c) analysis of interaction dynamics in terms of group cohesion, empathy, rapport, and so forth. We target co-located interactions in which the interacting parties are always humans. The survey covers a wide variety of settings and scenarios, including free-standing interactions, meetings, indoor and outdoor social exchanges, dyadic conversations, and crowd dynamics. For each of them, the survey considers the three main elements of nonverbal cue analysis, namely data, sensing approaches, and computational methodologies. The goal is to highlight the main advances of the last decade, point out existing limitations, and outline future directions.
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models, which are unavoidably opaque and often perceived as black-box methods, may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Moreover, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements, especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions transparently. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be applied directly to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and provide a comprehensive overview of model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
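As a small illustration of a model-agnostic interpretable ML technique of the kind surveyed here, the snippet below computes permutation feature importance for a classifier on a tabular biomedical dataset using scikit-learn. The dataset, model, and hyperparameters are placeholder choices for brevity and are not taken from the paper's case studies.

```python
# Hedged illustration of a model-agnostic XAI technique: permutation feature
# importance on a tabular biomedical dataset. Dataset and model choices are
# placeholders, not the paper's case studies.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the held-out score drops, giving a model-agnostic estimate of its relevance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```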